
    Who is Reading Whom Now: Privacy in Education from Books to MOOCs

    This Article is the most comprehensive study to date of the policy issues and privacy concerns arising from the surge of ed tech innovation. It surveys the burgeoning market of ed tech solutions, which range from free Android and iPhone apps to comprehensive learning management systems and digitized curricula delivered via the Internet. It discusses the deployment of big data analytics by education institutions to enhance student performance, evaluate teachers, improve education techniques, customize programs, and better leverage scarce resources to optimize education results. This Article seeks to untangle ed tech privacy concerns from the broader policy debates surrounding standardization, the Common Core, longitudinal data systems, and the role of business in education. It unpacks the meaning of commercial data uses in schools, distinguishing between behavioral advertising to children and providing comprehensive, optimized education solutions to students, teachers, and school systems. It addresses privacy problems related to “small data” (the individualization enabled by optimization solutions that read students even as they read their books) as well as concerns about big data analysis and measurement, including algorithmic biases, discreet discrimination, narrowcasting, and chilling effects. This Article proposes solutions ranging from deployment of traditional privacy tools, such as contractual and organizational governance mechanisms, to greater data literacy among teachers and parental involvement. It advocates innovative technological solutions, including converting student data into a parent-accessible feature and enhancing algorithmic transparency to shed light on the inner workings of the machine. For example, individually curated data backpacks would empower students and their parents by providing them with comprehensive portable profiles to facilitate personalized learning regardless of where they go. This Article builds on a methodology developed in the authors' previous work to balance big data rewards against privacy risks while complying with several layers of federal and state regulation.
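    To make the “data backpack” idea more concrete, the sketch below models a hypothetical portable student profile that a parent or student could export in a readable format and carry between schools. The class name, field names, and file format are invented for illustration; the Article itself does not prescribe any particular structure.

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class DataBackpack:
    """Hypothetical portable student profile (a "data backpack")."""
    student_id: str
    reading_level: str
    completed_units: list = field(default_factory=list)
    accommodations: list = field(default_factory=list)

    def export(self, path: str) -> None:
        """Write the profile as plain JSON so a parent or student can
        inspect it and take it to another school or learning platform."""
        with open(path, "w") as f:
            json.dump(asdict(self), f, indent=2)

# Example: a parent exports the profile before switching schools.
backpack = DataBackpack(
    student_id="S-1024",
    reading_level="grade 5",
    completed_units=["fractions", "persuasive writing"],
)
backpack.export("backpack.json")
```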

    Big Data for All: Privacy and User Control in the Age of Analytics

    We live in an age of “big data.” Data have become the raw material of production, a new source of immense economic and social value. Advances in data mining and analytics and the massive increase in computing power and data storage capacity have expanded by orders of magnitude the scope of information available to businesses and government. Data are now available for analysis in raw form, escaping the confines of structured databases and enhancing researchers’ abilities to identify correlations and conceive of new, unanticipated uses for existing information. In addition, the increasing number of people, devices, and sensors that are now connected by digital networks has revolutionized the ability to generate, communicate, share, and access data. Data create enormous value for the world economy, driving innovation, productivity, efficiency, and growth. At the same time, the “data deluge” presents privacy concerns that could stir a regulatory backlash, dampening the data economy and stifling innovation. In order to strike a balance between beneficial uses of data and individual privacy, policymakers must address some of the most fundamental concepts of privacy law, including the definition of “personally identifiable information,” the role of individual control, and the principles of data minimization and purpose limitation. This article emphasizes the importance of providing individuals with access to their data in a usable format. This will let individuals share the wealth created by their information and incentivize developers to offer user-side features and applications harnessing the value of big data. Where individual access to data is impracticable, data are likely to be de-identified to an extent sufficient to diminish privacy concerns. In addition, since in a big data world it is often not the data but rather the inferences drawn from them that give cause for concern, organizations should be required to disclose their decisional criteria.
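    As an illustration of the kind of de-identification the article mentions (not the authors' own method), the sketch below drops the direct identifier, replaces the name with a salted hash, and coarsens quasi-identifiers such as age and ZIP code. The record fields, salt, and banding are assumptions made for the example; real de-identification would also require an assessment of re-identification risk.

```python
import hashlib

def deidentify(records, salt="example-salt"):
    """Reduce identifiability of simple records before analysis:
    hash the name into an opaque token, band the age into a decade,
    and keep only the 3-digit ZIP prefix. Illustrative only."""
    out = []
    for r in records:
        token = hashlib.sha256((salt + r["name"]).encode()).hexdigest()[:12]
        decade = (r["age"] // 10) * 10
        out.append({
            "token": token,
            "age_band": f"{decade}-{decade + 9}",
            "zip_prefix": r["zip"][:3],
            "purchases": r["purchases"],
        })
    return out

sample = [{"name": "Alice Doe", "age": 34, "zip": "10027", "purchases": 7}]
print(deidentify(sample))
```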

    Beyond IRBs: Ethical Guidelines for Data Research


    Privacy of Personal Things in Active Learning Spaces Need Individually Evolved Requirements

    Technology-enhanced active learning (TEAL) spaces could represent a significant benefit to learning and teaching at universities. TEAL spaces support students in projecting presentations (e.g. from smart-phones) and sharing notes (e.g. from smart-watches) with peers. Importantly, this sharing takes place partly amongst their co-present small group and sometimes with the whole class. However, plugging personal things into smart spaces whose first requirement is to accept as many devices as possible is not without consequence. A projected notification of a political conversation, for example, has the potential to harm the individual both within the space and beyond, opening them to unwanted judgment, criticism and assessment. The traditional argument from the usable security community is that of intervention prior to any use whatsoever: users need to be trained, taught and/or nudged to avoid such problems. We conducted an informal focus group with students in a pilot TEAL space, exploring issues around the privacy and security of using personal devices in such spaces. The reality is that it is hard to perceive the privacy and security challenges prior to using the space. We argue that such prior interventions are not only a significant barrier to student adoption of smart spaces, but also ineffective in ensuring the safety of individuals in the long term. We argue that in designing smart spaces, both on campus and off, designers need to adopt an approach of individually evolved privacy requirements to ensure an ongoing safe, creative space for students. Two important features are: (a) as a small group develops bonds, its privacy level needs to be reduced over time, and (b) the best privacy level depends on whether the screen is currently shared with the small group or the whole class.
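    A minimal sketch of what such individually evolved privacy requirements could look like in practice is given below; the two rules encode features (a) and (b) from the abstract. The function name, privacy-level labels, and day thresholds are invented for illustration and do not come from the paper.

```python
def notification_privacy_level(days_working_together: int, shared_with: str) -> str:
    """Illustrative rule for filtering personal notifications in a TEAL space.

    (a) Privacy relaxes as the small group builds trust over time.
    (b) Anything projected to the whole class stays at the strictest level.
    """
    if shared_with == "class":
        return "strict"      # whole-class projection: suppress personal notifications
    if days_working_together < 14:
        return "strict"      # new group: hide all personal content
    if days_working_together < 60:
        return "moderate"    # some trust: show app names but not message content
    return "relaxed"         # established group: show notification previews

# Example: a smart-watch notification arrives while presenting to the whole class.
print(notification_privacy_level(days_working_together=90, shared_with="class"))  # -> "strict"
```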

    Taming The Golem: Challenges of Ethical Algorithmic Decision-Making


    Droit à l’oubli: Canadian Perspective on the Global ‘Right to be Forgotten’ Debate

    European courts have recognized a “right to be forgotten” (RTBF) that would allow individuals to stop data search engines or other third parties from providing links to information about them deemed irrelevant, no longer relevant, inadequate or excessive. There is a lack of consensus between the EU and the U.S. on the legitimacy of this right, which illustrates the transatlantic cultural clash over the weight given to privacy relative to other rights, such as freedom of information and freedom of speech. This is problematic given that privacy regulators in Europe have also pressed for a broad view of this right, seeking to extend it globally and requesting that information be delisted not only from European domain extensions but from all extensions. Some argue that such an extraterritorial effect not only allows someone from a different jurisdiction or country to erase information that they perceive as “irrelevant” or “illegitimate” based on their own set of values; it also arguably promotes one culture’s value of individual privacy rights over other cultures’ value of free expression. While the Canadian Charter of Rights provides constitutional protection to fundamental freedoms such as freedom of expression, Canada has also adopted data protection laws that are similar to the European Directive 95/46/EC. This paper explores whether importing an RTBF would be legal in Canada. The authors argue that such a right may be unconstitutional in Canada: it would most likely infringe upon freedom of expression in a way that cannot be demonstrably justified under the Canadian Constitution. They also argue that the legal framework in Quebec addresses some of the privacy and reputational concerns that an RTBF is meant to address through a “public interest” test, although they acknowledge that there are some limits to this framework. The notions of res judicata and periods of limitations must be revisited to ensure that this privacy framework can adequately address the fact that, with the Internet, data can outlive the context in which they were published and considered legitimate. The fact that data once considered outdated may become relevant again over time should also be considered. The authors warn against entrusting private entities with the tasks of arbitrating fundamental rights and values and determining what is in the public interest, with little or no government or judicial oversight.